.SS(Review and Reflection,,P85:)
By way of review we
sketch the basic LISP evaluator of {yonss(P6)}:
%3eval%* plus the additional artifacts for %3label and function%*.

There are two arguments to %3eval%*: a %2form%*⊗↓Throughout this section we
will say "form", "variable", "λ-expression", etc.  rather than "a
representation of a" ... "form", "variable", "λ-expression", etc. No
confusion should result, but remember that we %2are%* speaking imprecisely.←, that
is, an expression which can be evaluated; and  an association list or
%2symbol table%*. If the form is a constant, return that
form. If the form is a variable, find the value of the variable in the current
environment.
If the form is a conditional expression, then evaluate it
according to the semantics of conditional expressions.

The form might also be a functional argument. In this case evaluation consists of
associating the current environment 
with the function and returning that construct as value; 
in LISP this is done with the %3funarg%* device.
Any other form seen by %3eval%* is assumed to be an
application, that is, a  function followed by arguments.
The arguments are evaluated from left-to-right and the function is then applied
to these arguments. 
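
In outline, the evaluator just sketched has the following structure.
This is only a skeleton; the recognizers, selectors, and helpers
(%3isconst, isvar, lookup, evcond, evlis, func, arglist,%1 and %3mkfunarg%1)
are illustrative names rather than the definitions of {yonss(P6)}.
.BEGIN SELECT 3;GROUP;NOFILL;

eval <= λ[[form;environ]
         [isconst[form] → form;
          isvar[form] → lookup[form;environ];
          iscond[form] → evcond[arglist[form];environ];
          isfunarg[form] → mkfunarg[form;environ];
          %et%3 → apply[func[form];evlis[arglist[form];environ];environ]]]
.END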

The part of the evaluator which handles function application is called %3apply%*.
%3apply%* takes three arguments: a LISP function, a list of evaluated arguments, and
the current symbol table. If the function is one of the five LISP primitives
then the appropriate action is carried out. If the function is a λ-expression
then bind the formal parameters (the λ-variables) to the evaluated arguments and
evaluate the body of the function. The function might also be the result of a
functional argument binding; in this case apply the function to the arguments
and use 
the saved environment, rather than  the current environment, to search for
non-local bindings.
If we are applying  the %3label%1 operator, 
recalling {yon(P192)}, we  build a %3funarg%1-triple 
and new environment such that the environment bound in the triple is the
new environment. We then apply the function to the arguments in this new 
environment.

.P227:
If the function has a name 
we look up that name in the current environment.
Currently we expect that value to be a λ-expression, which is then applied. 
However, since function names are just variables, there is no reason that the
value of a function name could not be a simple value, say an atom, and 
%2that%* value can be
applied to the arguments. This process can continue recursively until
a true function-object is uncovered.  Examination of %3apply%* on {yon(P69)}
will show that %3apply[X;#((A#B))#;#((X#.#CAR)#...#)]%* %2will%* be handled correctly.
The natural extension of this idea is to allow a %2⊗→computed function↔←%*. That
is, if the function passed to %3apply%* is not recognized as one of the preceding
cases, then evaluate the function-part until it is recognized. Thus
we will allow such forms as:
.BEGIN CENTERIT;SELECT 3;

←((CAR#(QUOTE#(CAR#(A#.#B))))#(QUOTE#(A#.#B)))
.END
.BEGIN GROUP;TABIT3(25,34,42);
The addition of computed functions modifies %3apply%1 ({yon(P69)}) slightly:
.SELECT 3;

apply <= λ[[fn;args;environ]\[iscar[fn] → car[arg%41%*[args]];
\ iscons[fn] → cons[arg%41%*[args];arg%42%*[args]];
\        ...                ... 
\ islambda[fn] → eval[\body[fn];
\\\pairlis[vars[fn];args;environ]];
\ %Et%3 → apply[\eval[fn;environ];
\\args;
\\environ] ]]
.END
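
To see the final clause of this %3apply%1 at work, here is a sketch of a hand
evaluation of the example above, with the environment simply carried along as %3env%1:
.BEGIN SELECT 3;GROUP;NOFILL;

eval[((CAR (QUOTE (CAR (A . B)))) (QUOTE (A . B)));env]
  = apply[(CAR (QUOTE (CAR (A . B))));((A . B));env]
  = apply[eval[(CAR (QUOTE (CAR (A . B))));env];((A . B));env]
  = apply[CAR;((A . B));env]
  = A
.END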

The subset of LISP which is  captured by this evaluator is 
a strong and useful language even though it lacks several of the more
common programming language features⊗↓It is "strong", both practically
and theoretically. It is sufficiently powerful
to be "universal" in the sense that all computable functions
are representable in LISP. In fact the %3eval-apply%1 pair
represents a "universal function", capable of simulating
the behavior of any other computable function. The
usual arguments about the halting problem#(⊗↑[Rog#67]↑) and related matters
are easily expressed and proved in this LISP. Indeed, the
original motivation for introducing the M-to-S expression
mapping was to express the language constructs in the
data domain. Only then was "S-expression#LISP" used as the programming
language. It was S.#Russell who convinced McCarthy that the
theoretical device represented in %3eval%1 and %3apply%1
could in fact be programmed on the IBM704.←. This subset
is called the %2applicative subset of LISP%1, since its
computational ability is based on the mathematical idea of
function applications. We have persistently referred to our LISP
procedures as LISP functions, even though we have seen some
differences between the concept of function in mathematics and the concept
of procedure in LISP. It is the mathematical idea of function which the
procedures of an applicative language are trying to capture.
Regardless of differences in syntax and evaluation
schemes, the dominant characteristic of an applicative language  is 
that a given "function" applied to a given set of arguments 
%2always%1 produces the same result: either the computation
always produces an error, or it always fails to terminate, or it always
produces the same specific value.
The applicative subset of LISP doesn't quite succeed.
The treatment of free variables calls the environment into play.
So in LISP, we must add a phrase about "a given environment", and the point
becomes  that the given computation in the given environment
always has the same effect. That  means we have no way 
to destructively change the environment. Languages which have such
ability are said to have %2side-effects%1 and are the business of the
next chapter.

We have pointed out several "procedural" differences. Our treatment
of conditional expressions differs from the usual treatment of
function application: our standard rule for function application
is "call#by#value" which requires the evaluation of all arguments
before calling the LISP function, whereas  only some of the
arguments to a conditional expression are evaluated. Note that the whole
question of "evaluation of arguments" is a procedural one; arguments
to functions aren't "evaluated" or "not evaluated", they just "are".
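
For example, consider a definition in the style of %3fact%1, taking %3sub1%1 and
%3times%1 as the obvious arithmetic functions:
.BEGIN SELECT 3;GROUP;NOFILL;

fact <= λ[[n][eq[n;0] → 1; %et%3 → times[n;fact[sub1[n]]]]]
.END
If the conditional expression were subjected to "call#by#value" -- both %31%1 and
%3times[n;fact[sub1[n]]]%1 evaluated before the selection -- then %3fact[0]%1
would require %3fact[sub1[0]]%1, which would require %3fact[sub1[sub1[0]]]%1,
and the computation would never terminate.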

We have seen different algorithms computing the same function; for
example %3fact%1 and %3fact%41%1#({yon(P21)}) both compute the factorial
function. If there are different algorithms for computing
factorial, then there are different algorithms for computing
the value of an expression, and %3eval%1 is just one such algorithm.
Just as the essence of %3fact%1 and %3fact%41%1 is the factorial function,
we should capture the essence expressed in %3eval%1.
Put another way, when  applications programmers use %3sqrt%1 or %7p%1
they have a specific mathematical function or constant in mind.
The implementation of the language supplies approximations to these
mathematical  entities, and assuming the computations stay within the
numerical ranges of the machine, the programmers are free to
interpret  any output as that which the mathematical entities would
produce. More importantly the programmers have placed specific interpretations
or meanings on symbols. We are interested in doing the same thing only
we wish to produce a %2freer%1 interpretation, which only mirrors
the essential ingredients of the language constructs. That is,
%3sqrt%1 represents a %2function%1 and %7p%1 represents a %2constant%1.
The %3eval-apply%1 pair gives one interpretation to the meaning of functions
and constants, but there are different interpretations and there are
different %3eval-apply%1 pairs.

The remainder of this section  will resolve some of the tension between
function and procedure. We will study some of the mathematical implications
of applicative languages. First this will be done using the %9λ%1-calculus, 
a formal 
language for studying the concept of function. Then the results of this study
will be applied to the applicative LISP subset.


What does this discussion have to do with 
programming languages? Clearly the order of
evaluation or reduction is directly  applicable. 
Our study will also  give insights 
into the problem of language specification.
Do we say that
the language  specification consists of a syntactic component and some
 description of the evaluation of constructs in that language?
Or do we say that these two components, syntax and a machine, are merely
devices for describing  or formalizing notions about  some abstract  domain of
discourse?  A related question for programmers and language designers
involves the ideas of correctness and equivalence of programs. How do we
know when a program is correct?  This requires some notion of a standard
against which to  test our implementations⊗↓"Unless
there is a prior mathematical definition of a language at hand,
who is to say whether a proposed implementation is %2correct%1?"#⊗↑[Sco#72]↑.←. 
If our algorithms really are  reflections of
 functional properties, then we should develop methods
for verifying that those properties are expressed in our algorithms. Against this
we  must balance the realization that many programs don't fit neatly
into this mathematical framework. Many programs are best characterized
as themselves. In this case we should then be interested in 
verifying equivalence of programs. If we develop a new algorithm
we have some responsibility to demonstrate that the algorithms are equivalent
in very well defined ways⊗↓Current theory is inadequate for dealing with
 many real programming tasks. However the realization that
one has a responsibility to consider the questions, %2even informally%1,
is a sobering one  which more programmers should experience.←.

The study of formal systems in mathematical logic offers insight.
There, we are presented with a syntax and a system of axioms and rules of inference.
Most usually we are also offered a "model theory" which gives us 
interpretations 
or models for the syntactic formal system; the model theory usually supplies
additional means of giving convincing arguments for the validity of statements
in the formal system.  The arguments made within the formal system
are couched in terms of "provability" whereas arguments of the model theory are
given in terms of "truth". 
For a discussion of formal systems and model theory see ⊗↑[Men#64]↑.

.<<c w mmorris>>
.GROUP;
C. W. Morris (⊗↑[Mor#55]↑) isolated three perspectives on language: syntax, pragmatics,
and semantics. 

.BEGIN INDENT 0,8;GROUP;

%2Syntax%*: The synthesis and analysis of sentences in  a language. This area is
well cultivated in programming language  specification.
.END
.APART;
.BEGIN INDENT 0,11;GROUP;

%2Pragmatics%*: The relation between the language and its user. 
Evaluators, like %3tgmoaf, tgmoafr%1 and %3eval%1, come under 
the heading of pragmatics.
Pragmatics are more commonly referred to as operational semantics in discussions
of programming languages.
.END
.BEGIN INDENT 0,10;GROUP;

%2Semantics%*: The relation between constructs of the language and the abstract
objects which they denote. This subdivision is commonly referred to as denotational
semantics.

.END

Put differently, syntax describes appearance, pragmatics describes
implementation,  and
semantics describes meaning⊗↓This  division of language
reflects an  interesting parallel
between mathematical logic and programming languages: in mathematical logic
we have deduction, computation, and truth; in programming language specification
we have three analogous schools 
of thought: axiomatic, operational, and denotational.
We won't go into details here, but
see ⊗↑[Men#64]↑   for the mathematical logic and ⊗↑[Dav#76]↑ 
for a study of the interrelationships;
see ⊗↑[Hoa#69]↑
for a discussion of the axiomatic school;  we will 
say more about the operational and denotational schools in this
section.←.
Though there is a strong consensus on the syntactic tools for specifying
languages⊗↓But see ⊗↑[Pra#73]↑.←, there is no consensus on
adequate pragmatics and even less agreement on the adequacy of
semantic descriptions.
We will first outline the pragmatics questions and then discuss a bit 
more of the semantics issues.
In this discussion we will use the language distinctions of Morris
even though the practice is not commonly followed in the literature.
Typically, syntax is studied  precisely and semantics is "everything else";
however the distinction between computation (pragmatics)  and truth (semantics)
is important and should not be muddled.


One thought is to describe the pragmatics of a language in terms of the process
of compilation.  That is, the pragmatics is specified by some canonical compiler
producing code for some well-defined simple machine.  The meaning of
a program is the outcome of an interpreter interpreting this 
code.
But now,
to understand the meaning of a particular construct, this proposal
requires that you read the description 
of a compiler 
and understand the simple machine.
Two problems arise immediately. First, compilers are not particularly
transparent  programs. Second, a very simple machine  may not adequately
reflect the actual mechanisms used in an implementation. This aspect is
important if the semantic description is to be meaningful to an implementor.

There is a more fundamental difficulty here if we consider the practical aspects
of this proposal.
When asked to understand a program written in a high-level language you think
about the %2behavior%1 of that program in a very direct way. The pragmatics
is close to the semantics.
You think about the computational behavior as it executes; you
do not think about the code that gets generated and then think about the
execution of that code.  

A more natural pragmatics for the
constructs is given in terms of the run-time behavior of these constructs.
The %3eval%* function describes the execution sequence of a
representation of an arbitrary LISP expression.  Thus %3eval%* is the pragmatic
description of LISP.  Such descriptions  have a flavor of circularity which
some find displeasing. However some circularity is inevitable; we must assume that 
something is known and does not require further explication.
If you decide to describe the semantics of language L%41%* in a
simpler language L%42%* then either L%42%* is "self#evident" or you must give a
description of the meaning of L%42%*.


So, realistically, the choice is where to stop, not whether to stop.
The LISP position is that the language and data structures are sufficiently
simple that self-description is satisfactory.
Attempts have been made to give non-circular interpreter-based
descriptions of semantics for languages other than LISP.  There is a
 VDL description of PL/1, and a description  of ALGOL 68
by a Markov algorithm. Both these attempts result in a description
which is long and unmanageable for all but the most persistent.

There are some compelling  reasons for deciding on direct circularity.  First,
you need only learn one language.  If the specification is given in
the source language, then you learn the programming language and the
specification language at the same time.  Second, since the evaluator is
written in the language, we can understand the language by understanding
the workings of the single program, %3eval%*; and if we wish to modify the
pragmatics, we need change only one collection of high-level programs.  
If we
wished to add new language constructs to LISP  we
need only modify %3eval%* so that it recognizes an occurrence of that new
construct,  and add a  function to describe the
interpretation of the construct.
That modification might be extensive,
 but  it will
be a controlled revision  rather than a complete reprogramming effort.
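
For instance, here is a sketch of such a revision. To add a simple two-armed
%3if%1 construct, represented say as %3(IF e%41%* e%42%* e%43%*)%1, we add to %3eval%1 a clause
%3isif[form] → evif[form;environ]%1 and supply one new function; the representation
and the names, including the selectors %3second, third,%1 and %3fourth%1, are
illustrative:
.BEGIN SELECT 3;GROUP;NOFILL;

isif <= λ[[x] and[not[atom[x]];eq[first[x];IF]]]

evif <= λ[[x;environ]
         [eval[second[x];environ] → eval[third[x];environ];
          %et%3 → eval[fourth[x];environ]]]
.END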

We should mention that there is another common method for specifying 
the pragmatics of a programming language. The original Algol report (⊗↑[Alg#63]↑)
introduced a standard   for syntax specification: the
BNF equation. It also  gave a reasonably precise description of the
pragmatics of Algol statements in natural language.
The style
of presentation was concise and clear, but suffers from the imprecision
of natural language.
Regardless,
this style of description is quite common and very useful. A recent
report#(⊗↑[Moor#75a]↑) on the pragmatics of INTERLISP  used this descriptive style.
If the language is quite complex, then a formal description can be
equally complex. In {yonss(P211)} we will see that our interpreter definition
will extend nicely to richer subsets of LISP.


A language description should not attempt to explain everything about a language.
It need only explain what needs explaining.
You must assume that your reader understands something; as McCarthy says: %2`Nothing
can be explained to a stone'%*#⊗↑[McC#66]↑.
A  description of a language must be natural and must match the expressive
power of the language. That is, it  should  model how the constructs
are to be implemented.   What is needed is a description
 which exploits, rather than ignores, the structure of the language.
Mathematical notation is no substitute for clear thought,
but we believe careful formulations of semantics will lead to additional
insights in  the pragmatics of language 
design ⊗↓R. D. Tennent has invoked this approach in
the design of %3QUEST%*#(⊗↑[Ten#76]↑).←.  The task requires new mathematical tools
to model the language constructs, and requires increased care on the part of the
language designer.

Let's look at  the issue of syntax for a moment.  
A satisfactory description of much of
programming language syntax is standard BNF.  The BNF is a generative, or synthetic
grammar.  That is, its rewriting rules specify how to %2generate%1 well-formed
strings.  An evaluator, on the other hand,
takes a program representation as input and gives as output
a representation of the value of executing the program.  This implies that
our evaluator is %2analytic%1 
rather than synthetic; it must have some way of analyzing the structure of 
the program.


.<<abstract syntax>>
.BEGIN GROUP;
.P75:
In ⊗↑[McC#62]↑, John McCarthy introduced the concept of abstract 
⊗→analytic syntax↔←. The idea is directly derivable from the LISP
experience. The syntax
is analytic, rather than synthetic, in the sense that it tells how 
to take programs apart#--#how to recognize
and select subparts of programs using predicates and selector functions⊗↓We will
deal with abstract %2synthetic%1 syntax when we discuss compilers.←.
It is abstract
in the sense that it makes no commitment to the external representation of the
constituents of the program.  We need only be able to recognize the occurrence
of a desired construct. For example:
.TURN ON "←";

.BEGIN GROUP;SELECT 3;TABIT1(16);

isterm <= λ[[t] or[\isvar[t];
\isconst[t];
\and[issum[t];isterm[addend[t]];isterm[augend[t]]]]]
.END

%1and the BNF equation:

←<term> ::= <var> | <const> | <term> + <term>.

%3issum, addend,%1 and %3augend%1, don't really care whether the sum is 
represented as 
x+y, or %3+[x;y]%1 or %3(PLUS#X#Y)%* or %2xy+%1.  
The different external representations
are reflections of different ⊗→concrete syntax↔←es; the above BNF equation gives one.
It is parsing which links a concrete syntax with the abstract syntax.
.END
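
For instance, if sums were given the LISP-style concrete representation
%3(PLUS#X#Y)%1, one sketch of the recognizer and selectors is (%3second%1 and
%3third%1 being the obvious selectors):
.BEGIN SELECT 3;GROUP;NOFILL;

issum <= λ[[t] and[not[atom[t]];eq[first[t];PLUS]]]
addend <= λ[[t] second[t]]
augend <= λ[[t] third[t]]
.END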

Since  the evaluator must 
operate on %2some%1 internal representation of the source program, 
a representation reflecting the structure of the program (commonly known
as a parse tree) is most natural. 
We can describe the evaluation of a program in terms of a function
which operates on a parse tree using the predicates and selectors of the
analytic syntax. Abstract syntax concerns itself only with those properties
of a program which are of interest to an evaluator.

.GROUP;
The Meta expression  LISP  constitutes a concrete syntax.
The M-to-S-expression translator is the parser which maps
the external representation onto a parse (or computational) tree. 
The selectors
and predicates of the analytic syntax are straightforward. Recall the BNF for
LISP:

.BEGIN TABIT1(11);GROUP;

<form>\::= <constant>
\::= <variable>
\::= <function>[<arg>; ... ;<arg>]
\::= [<form> → <form>; ... ;<form> → <form>]
\  ....

.END
.APART
Our evaluator needs to recognize constants (%3isconst%*),
variables (%3isvar%*), conditional expressions (%3iscond%*), and applications
(%3isfun+args%*).
We  need to write a function in some language expressing the values
of these constructs. Since the proposed evaluator is to manipulate parse
trees, and since the domain of LISP functions %2is%* binary trees, it should seem
natural to use LISP itself.  If this is the case, then we must represent the parse
tree as a LISP S-expr and represent the selectors and  recognizers as LISP functions and
predicates.

.BEGIN SELECT 3;TABIT1(16);GROUP;
%1Perhaps:%3

isconst <= λ[[x] or[\numberp[x];
\eq[x;NIL];
\eq[x;T];
\and[not[atom[x]];eq[first[x];QUOTE]]]]

isvar <= λ[[x] and[atom[x]; not[isconst[x]]]]

iscond <= λ[[x] eq[first[x];COND]]

.END
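
In the same spirit, a sketch for applications: a form which is none of the
other cases is taken to be a function followed by its arguments.
.BEGIN SELECT 3;GROUP;NOFILL;

isfun+args <= λ[[x] not[or[isconst[x];isvar[x];iscond[x]]]]
func <= λ[[x] first[x]]
arglist <= λ[[x] rest[x]]
.END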

There are really two issues here: %2a%* representation of the analytic syntax of a language,
and a representation in terms of the language %2itself%1.  
In conjunction, these two
ideas give a natural and very powerful means of specifying languages.

If this style of specification %2is%* really effective,
then we should be able to 
specify other languages  in a similar fashion.
What are the weak points of LISP as
a `real' programming language?  Mainly the insistence on binary tree representations
of data.  Many applications could profit from other
data representations.  What we would then like is a language which has richer
data structures than LISP but which is designed to allow LISP-style semantic specification.
That is, we would be able to give an analytic syntactic specification for the language.
We would be able to write an evaluator, albeit more complex than %3eval%*, in the language
itself.  The evaluator would operate on a representation of the program as a 
data structure of the language, just as %3eval%* operates on the S-expr translation
of the LISP program.  The concrete syntax would be specified as a set of BNF equations,
and our parser would translate legal strings -- programs -- into parse trees.

.BEGIN TABIT1(30);GROUP;
In outline, to  specify a construct  we must have at least the following:

\%21.%*  A concrete production.
\%22.%*  An abstract data type.
\%23%*.  A mapping from %21%* to %22%*.
\%24.%*  An evaluator for the abstract type.

.END
In {yonsec(P256)} we will sketch a recent LISP-like language, EL1, which
%2does%1 follow these rules⊗↓Compare  steps  %21%1 through %24%1 with those on 
{yon(P116)}.←.

So, from a discussion of syntax we have gravitated back to evaluation.
After we reduce the questions of syntax of programming languages to questions
of abstract syntax and strip away many of the irrelevant syntactic differences,
how many %2real%1 differences between languages are there?
Semantics addresses this issue.



.<<begin lambda>>
Many of the ideas approximated in applicative programming
languages can be captured in %9λ%1-calculus (⊗↑[Chu#41]↑). 
Therefore we begin
with a brief discussion of the  %9λ%1-calculus and compare it with
the notions present in LISP. 
Our presentation will be quite brief and informal and the text will refer
 the interested reader to  more complete treatments.
In the interest of readability, we will
write %9λ%1-calculus expressions in a Gothic bold-face type; e.g. %9λ%* and %dx%*
instead of LISP's  %3λ%* and %3x%*.

The syntax of  well-formed  %9λ%*-expressions is quite simple:

.BEGIN TABIT1(11);GROUP;
.P252:
<wfe>	\::= <function>
\::= <applic>
\::= <variable>
<function>\::= %9λ%* <variable> %d.%* <wfe>
<applic>\::= %d(%* <wfe> <wfe> %d)%*
<variable>\::= %da%1 | %db%1 | ... %dz%1
.END

For example: %d(((%9λ%dx.(%9λ%dy.y))a) b)%1 is a <wfe>. Notice that the
class of <wfe> only covers the unary functions. That is no loss of generality,
since an n-ary function can be expressed as a unary function whose
value is an (n-1)-ary function. Also it is apparent that the parenthesizing
of %9λ%1-expressions hampers readability. 
Therefore we introduce some abbreviational conveniences. 
Upper case gothic letters, like %dM%1 and %dN%1, will be used as meta-variables
ranging over well-formed expressions.
We will also assume that
parentheses associate to the left; the application %d((M#N)O)%1 can be written
%d(M#N#O)%1 or even %d(MNO)%1. Also
the scope of %d.%1 will extend as far to the right as possible, allowing
us to write %d(%9λ%dx.(%9λ%dy.%DM%d))%1 as
%d(%9λ%dx.%9λ%dy.%dM%d)%1. Recalling our first 
observation, we can  abbreviate this as
%d(%9λ%dxy.%dM%d)%1.
Therefore %d(((%9λ%dx.(%9λ%dy.y))a) b)%1 can  be written as:
%d((%9λ%dx.(%9λ%dy.y))a b)%1.  This %9λ%1 expression represents the application
of a binary function to  its two arguments; the value of that function is its
second argument. This is more apparent after  our reduction to
%d((%9λ%dxy.y) a b)%1. 

.GROUP;
.P254:
We need a few definitions.
A variable, %Dx%*, is  a %2⊗→free variable↔←%*⊗↓Compare
this definition of free with that in {yonss(P135)}.← in a <wfe>, %dE%* if:

.BEGIN INDENT 0,10;GROUP;
%dE%* is the variable, %dx%*.

%dE%* is an application, %d(OP A)%*, and %dx%* is free in %dOP%* or %dx%1 is free in %dA%*.

%dE%* is a %9λ%*-expression, %9λ%dy.M%1, where %dx%* is free in %dM%* and %dx%* is
distinct from %dy%*.

.END
.APART
Thus %da%1 is free in %d(((%9λ%dx.(%9λ%dy.y))a) b)%1.

A variable is a %2⊗→bound variable↔←%* in %dE%*
if it occurs in %dE%* and is not free in %dE%*.
For example, %dy%1 is bound in %d(((%9λ%dx.(%9λ%dy.y))a) b)%1.

This describes the syntax of the language. 

We need rules (of inference) for manipulating the
objects of the language. Since the intent of the calculus is to mirror the
essence of functionality and function application, the rules should
capture those aspects. The rules of the %9λ%1-calculus are 
called %2conversion#rules%1.
The intent is that the meaning of a <wfe> is not changed by an application of one
the conversion rules.
The presentation of the conversion rules is simplified after establishing
one further definition.
A %2context%1 is a <wfe> with one subexpression missing.
We use the notation: %dE[#]%* for a context. For example if
%dE[#]#=#%d(%9λ%dx.(%9λ%dy.[#]))%1, then %dE[(%9λ%dz.(%9λ%du.z))]%1 is
%d(%9λ%dx.(%9λ%dy.
(%9λ%dz.(%9λ%du.z))
))%1. 

.GROUP;
There are two %9λ%*-conversion rules which are of interest here:
.BEGIN INDENT 0,12;

%9α%*-conversion: %dE[%9λ%*x.M]%1 may be converted to %dE[%9λ%*y.M']%1 if %dy%* is not
free in %dM%* and %dM'%* results from %dM%* by replacing every free occurrence
of %dx%* by %dy%*. %9α%*-conversion allows alphabetic change of bound variables.

.END
For example: %d(%9λ%dx.xx)%1 %7a%1-converts to %d(%9λ%dy.yy)%1.

.APART;
.GROUP;
.BEGIN INDENT 0,12;

%9β%1-reduction: %dE[(%9λ%*x.M N)]%1 is %9β%*-reducible if no variable occurring
free in %dN%*
is bound in %dM%*. %dE[(%9λ%*x.M#N)]%1 is reducible to %dE[M']%* where %dM'%* results
from the replacement of all free occurrences of %dx%* in %dM%* by %dN%*.
.END
For example, %d(((%9λ%dx.(%9λ%dy.y))a) b)%1 %7b%1-reduces to:
%d((%9λ%dy.y) b)%1  and %7b%1-reduces to: %db%1.
.APART;

Rules for equality complete the specification of the transformations.
These rules include substitutivity: in particular, if %dM%d#=#%dN%1 then
%dE[M]#=#E[N]%1. 

We can  include other special  rules in a %9λ%1-calculus. A typical
class is the
%9∂%1-reduction rules. These are used as simplification rules and 
play the role of
constants  in a %9λ%*-language. Each rule has the basic interpretation:
"%dE%* may always be replaced by %dE'%*." The constants and %9∂%*-reduction
rules are used to construct a %9λ%*-calculus for a specific domain of interest.

.P261:
The %9λ%1-calculus was invented to  formalize the
properties of functionality and function application.
The %7a%1 rule formalizes the notion that a function is unchanged
if we change its formal parameters. This 
property is related to %2⊗→referential transparency↔←%1.
In more detail, a language possesses
referential transparency if the value of an expression which contains subexpressions
depends only on the values of those subexpressions. For example, if the value
of an expression changed because we changed the name of a variable in
one of its subexpressions, then that would violate referential transparency.
This means LISP is %2not%1 a referentially transparent language; 
for a fixed environment, the value
of %3λ[[x]#cons[x;y]][A]%1 
is the same as %3λ[[z]#cons[z;y]][A]%1, but need not
be the same as %3λ[[x]#cons[x;z]][A]%1.
The difficulty again is the treatment of free variables: dynamic binding
violates referential transparency. The %9λ%1-calculus is statically bound
and %2does%1 possess referential transparency. Referential transparency
is not simply a worthwhile theoretical property; its corollary, static binding,
is a very useful practical property. We will discuss some
implications of static binding in {yonss(P260)}⊗↓There have been
recent investigations (⊗↑[Hew#76]↑, ⊗↑[Sus#75]↑) of statically bound LISP-like
languages.←.


The %7b%1-rule expresses the intent of function application.
We would then expect that a model of the %9λ%1-calculus would have
a domain consisting of functions; and require that the %7b%1-rule be 
reflected in the model as function application. The equality preserving properties
of the %7a%1 and %7b%1 rules
would require that 
if %d(f#a)#=#%d(g#a)%1 in the model, then any %7a%1 or %7b%1 manipulations 
of  those expressions will not affect that equality.

.BEGIN GROUP;
The language doesn't look particularly rich; it appears similar in power to LISP
with function application but without conditional expressions.
However that is only an illusion; for example we can represent conditional
expressions as follows:
Recall that %d((%9λ%dxy.y)a#b)%1  reduces to %db%1.
Similarly %d((%9λ%dxy.x)a#b)%1 will reduce to %da%1.
If we define, through convention or %9∂%1-rule,
%d(%9λ%dxy.y)%1 to be %ef%1 and
%d(%9λ%dxy.x)%1 to be %et%1, and establish a class, %7P%1, of %9λ%1-expressions
which always reduce to %et%1 or %ef%1, then a %9λ%1-expression
%d(P#A#B)%1, for %dP#%7e%1#%7P%1, can play the role of
%3if[p;a;b]%1. We can also demonstrate a %9λ%1-term which has an effect
approximated by %3label%1. See the problem on {yon(P271)}.

.END
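For example, with %et%1 in the predicate position the reduction selects the
first argument:
.BEGIN CENTERIT;

←%d((%9λ%dxy.x) a b) = (((%9λ%dx.(%9λ%dy.x)) a) b) → ((%9λ%dy.a) b) → a%1
.END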
The conversion rules have no restrictions concerning an order of application.
If the hypothesis for a rule is satisfied, then the rule may be applied.
That is as it should be.  In the reduction of
a %9λ%1-expression there may be several reductions possible at any one time.
If we design a %9λ%1-calculus machine, it might be specified with a preferred
order. The machine reflects "%2procedure%1"; the calculus reflects "%2function%1".


There are two common strategies for choosing a reduction step.
Applicative order corresponds to call-by-value, reducing the argument
before applying the function. Normal order reduces the outermost application
first; this corresponds to call-by-name. As we know from LISP, the order of 
evaluation can influence the termination of a computation. That observation
holds for the %9λ%1-calculus. To pursue what that means
requires a bit more terminology.
We will say that a  %9λ%*-expression is in 
%2normal form%* if
it contains no expression reducible by  the %9β%* rule.
Not all expressions have normal forms: let %7D%1 name 
%d((%9λ%dx.(xx))(%9λ%dy.(yy)))%1; %7D%1 does not have a normal form.
Every %7b%1 transformation of %7D%1 produces a new expression which is
also %7b%1 reducible.
Not all reduction sequences yield a normal form: 
when %d((%9λ%dx.y)#%7D%d)%1 is  reduced in normal order a normal form
results, whereas
applicative order will not yield a normal form.
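
Written out, with %7D%1 as above:
.BEGIN CENTERIT;

←%1normal order: %d((%9λ%dx.y) %7D%d) → y%1
←%1applicative order: %d((%9λ%dx.y) %7D%d) → ((%9λ%dx.y) %7D%d) → ...%1
.END
Each applicative-order step reduces %7D%1, and %7D%1 %7b%1-reduces to a copy of
itself, so that strategy never reaches a normal form; normal order discards
%7D%1 unevaluated and yields %dy%1 at once.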

The  application of the reduction rules, say
in either applicative or normal order, can be considered a
computation scheme. The process of reducing an expression is the computation,
and a computation terminates if that reduction produces a normal form.
With this interpretation, some computations terminate  and
some don't. A <wfe> has a normal form just in the case  that a reduction sequence 
terminates.  The computational interpretation of the %9λ%1-calculus rules can
be extended to the development of %9λ%1-calculus machines (⊗↑[Lan#64]↑).
The %9λ%1-calculus machines can simulate the reduction rules in several ways;
since the rules make no commitment for order of application, that choice is open.
Also, the reduction rules are described in terms of substitutions; a machine
might simulate this facet directly or might employ a mechanism like the
LISP symbol table.
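
As a hint of the substitution approach, here is a naive sketch in LISP.
Assume a %9λ%1-expression is represented as an atom for a variable, as
%3(LAMBDA#V#body)%1 for an abstraction, and as a two-element list for an
application; since %3subst%1 blindly replaces %2every%1 occurrence, this version
ignores the renaming needed to avoid capturing bound variables:
.BEGIN SELECT 3;GROUP;NOFILL;

isredex <= λ[[e] and[not[atom[e]];not[atom[first[e]]];eq[first[first[e]];LAMBDA]]]

beta <= λ[[e] subst[second[e];second[first[e]];third[first[e]]]]
.END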

This discussion  suggests some interesting problems.
First, 
there may well be two or more sequences of reductions for a %9λ%*-expression;
assuming they terminate,
is there any reason to believe that these reduction sequences will yield 
the same normal forms?  
Second, if we have two %9λ%*-expressions which reduce
to distinct normal forms, is there any reason to believe that they are
"inherently different" %9λ%*-expressions?


The first question is easier to answer.
If both reduction sequences terminate then they
result in the same normal form. This is  called the Church-Rosser
theorem. 

The second question  requires some explanation of "inherently different".
At one level we might say that by definition, expressions with different
normal forms are "inherently different". 
But  thinking of %9λ%*-expressions as functions,
to  say two %9λ%*-expressions are "different" is to say we can exhibit arguments
such that the value of one application is
not equal to the value of the other⊗↓If two functions
satisfy this then they are said to be extensionally equal.←. 
 C.#Boehm has established this for %9λ%*-expressions
which have normal forms.

However the situation changes when we examine
%9λ%*-expressions which do   not have normal forms.
Recalling the intuitive  relationship between non-termination and "no normal form",
we might expect that
all expressions without normal form are "equivalent". However, it can be shown
that such an identification would lead to contradictions.
We might also expect
that %9λ%1-expressions without normal forms are "different" from
expressions which have normal forms. This is also
not true; ⊗↑[Wad#71]↑ exhibits
two expressions, %dI%1 and %dJ%1, with and without normal form, respectively.
However these two expressions are the "same" in the sense that
3 and 2.99999#... are the "same". That is, %dJ%1 is the limit of a sequence
of approximations to %dI%1.
Also, we can exhibit two  %9λ%*-expressions,
 %dY%41%1 and %dY%42%1, both without normal form, 
 which are distinct
in that no reduction sequence will reduce one to the other; but our
intuition says that they are "the same function" in the sense that, for any
argument, %da%* we supply,  both reductions result in the same expression.
That is %d(Y%41%d#a)#=#(Y%42%d#a)%1⊗↓Note that 
%d(Y%4i%d#a)%1  may have a normal form even though
 %dY%4i%1 does not.←.


The reduction rules of the %9λ%*-calculus cannot help us.
The resolution of  the idea of "same-ness" requires stronger analysis⊗↓The 
interpretation of "same-ness" which we have been using is  that of extensional
equality. That is, two functions are considered the same function if
no differences can be detected under application of the functions to any arguments.←.
We can
give an interpretation to the %9λ%*-calculus such that in 
that interpretation the 
pairs %dI%1 and %dJ%1, or %dY%41%1 and %dY%42%1,
have the same meaning. This should be a convincing
argument for maintaining that they are the "same function" even though
the reduction rules are %2not%* sufficient to display that equivalence 
⊗↓This demonstration also gives credence to the position that the
meaning  transcends the reduction rules.  Compare the incompleteness results
of K. Gödel.←.
 D. Scott %2has%* exhibited a model or interpretation of
the %9λ%*-calculus, and D. Park has shown the equivalence in this model
of %dY%41%1 and %dY%42%1, and C.#Wadsworth has shown the equivalence of
%dI%1 and %dJ%1.

As we said at the beginning of the section, this calculus was 
intended to explicate the idea of "function" and "function application".
There is a reasonably subtle distinction between Church's conception
of a function as a rule of correspondence, and the usual set-theoretic
view of a function as a set of ordered pairs.
In the latter setting we rather naturally think of the elements of the domain
and range of a function as existing %2prior%* to the construction of the function:
.BEGIN TURN OFF "{","}";
"Let %3f%* be the function {<x,1>, <y,2>, ...}".
.END
Thus in this frame of mind we do not think of functions which can take
themselves as arguments: %3f[f]%*. Such functions are called %2⊗→self-applicative↔←%*.
Several languages, including LISP, allow the procedural analog of
self applicative functions. They are an instance  of functional arguments#({yonss(P76)}).
The LISP function discussed in the problem on {yon(P99)} is an instance
of a self-applicative LISP function.
Since we can deal with self-application as a procedural concept
at least,    perhaps some of this  understanding will help
with the mathematical questions. Again, we turn to the %9λ%1-calculus.

The calculus is an appropriate tool for studying self-application and
function-values since its syntax allows such constructs. Indeed %2everything%1
in the calculus is a representation of a function. Compare this
with the situation in LISP when we think of the S-expression
representation of the language as the language itself. Then the distinction
between program  and data disappears, just as it does in the %9λ%1-calculus.  
The conversion rules of the calculus allow a %9λ%*-expression to be
applied to any  %9λ%*-expression including the expression 
%2itself%*⊗↓There are  logical difficulties,
like Russell's paradox, which arise if we allow
unrestricted self-application.←.

.GROUP;
A similar situation exists in the programming language LISP.
We can evaluate expressions like:

%3((LAMBDA (X) %9x%3) (LAMBDA (X) %9x%3))%1
.APART
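
For a more concrete instance, the following form applies its argument to
itself; the argument is quoted, so it evaluates to a list which %3apply%1 then
treats as a function in the manner of {yon(P227)}. A sketch of the evaluation
shows the value is %3A%1:
.BEGIN CENTERIT;SELECT 3;

←((LAMBDA (X) (X X)) (QUOTE (LAMBDA (X) (QUOTE A))))
.END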

.<<lisp borrows>>
LISP has
borrowed heavily, but informally, from mathematics and logic.
It
uses the language of functions and functional composition, but  
  the usual
interpretation  of LISP expressions is procedural.
We want to relate that procedural knowledge to the formalism
which we have been developing. 

Most of the description of LISP which we have given so far is classified as ⊗→operational↔←.
That means our informal description of LISP and our later description of the LISP
interpreter are couched in terms of "how does it work  or operate". Indeed
the purpose of %3eval%* was to describe explicitly what     happens
when a LISP expression is  evaluated.  We have seen ({yon(P84)}) that discussion
of evaluation schemes is non-trivial; and that order of evaluation can affect the
outcome ({yon(P17)}).  
.BEGIN TURN ON "#";
.P91:
However, many times the order of evaluation is immaterial
⊗↓"One difficulty with the operational approach is that it (frequently)
specifies too much": C. Wadsworth.←. 
We saw on {yon(P93)} that %3eval%* will perform without complaint when
given a form %3f[#...#]%* supplied with too many arguments.
How much of %3eval%* is "really" LISP and how much is "really" implementation?
On one hand we have said that the meaning of a LISP expression is
by#definition  what %3eval%* will 
calculate using the representation of the expression. On the other hand,
we claim  that %3eval%* is simply %2an implementation%*.
There are certainly other implementations; i.e, other LISP functions %3eval%4i%1
which have the same input-output behavior as our %3eval%*. 
The position here is not that %3eval%* is wrong in giving
values to things like %3cons[A;B;C]%*, but rather: just what is it that
%3eval%* implements?

This other way of looking at meaning in programming languages is exemplified by
⊗→denotational↔← or mathematical semantics.
.P96:
This perspective springs from a common mathematical, philosophical, or logical
device of distinguishing between a %2representation%* for an object, and the
object itself. The most usual guise is the numeral-number distinction.
Numerals are notations (syntax) for talking about %2numbers%* (semantics).
Thus the Arabic %2numerals%* %32%1 and %302%*, the Roman numeral %dII%*,
and the binary notation %310%* all %2denote%* or represent
the %2number%* two. In LISP, %3(A#B), (A#.#(B)), (A#,#B)%* and %3(A#.#(B#.#NIL))%*
all are notations for the same S-expr.
We want to say that a LISP form 
%3car[(A#.#B)]%1 %2denotes%* %3A%1, or that %3car[A]%1 is undefined, just
as we say in mathematics that 2+2 denotes 4 or 1/0 is undefined. 

Similarly, we will say that
the denotational counterpart of a LISP function  is just a 
 mathematical  function  defined over our abstract domain.
Before  proceeding, we introduce some conventions
to distinguish notation from %2de%*notation. 
.BEGIN GROUP;
We will continue to use italics:
.BEGIN CENTER;

(%3A, B, ..., x, ... car, ...(A . B) %*) as notation in LISP expressions.
.END
Gothic bold-face:
.BEGIN CENTER;

(%dA, B, ..., x, ...car, ...(A . B)%*) will represent denotations.
.END
.END

Thus %3A%* is notation for %dA%*;
%3car[cdr[(A#B)]]%* denotes %dB%*; the mapping, %dcar%* is the denotation
of the LISP function %3car%*.


The operation of composition of LISP functions tries to denote 
mathematical composition;
thus in LISP, %3car[cdr[(A#B)]]%*  means apply the procedure %3cdr%* to the
argument %3(A#B)%* getting %3(B)%*; apply the procedure %3car%* to %3(B)%*
getting %3B%*. Mathematically speaking, we have a mapping, %dcar%fo%*cdr%1
which is a composition of the %dcar%* and %dcdr%* mappings; the ordered
pair %d<(A#B)#,#B>%* is in the graph of this composed mapping.
%dcar%fo%*cdr(#(A#B)#)%1 is %dB%*.
In this setting, any LISP characterization of a function is equally good;
the "efficiency" question has been abstracted
away. But many important properties of real programs %2can%*  be 
discussed in this  mathematical context; 
in particular, questions of equivalence
and correctness of programs are more approachable.

We should point out that
denotational thinking %2has%* been introduced before.
When we said ({yon(P86)}) that:

.BEGIN CENTERIT;SELECT 3;
←f[a%41%*; ... a%4n%*] = eval[(F A%41%* ... A%4n%*)],
.END;
we were
relating a denotational notion with an operational
notion. The left hand side of this equation
is denotational; it expresses a functional relationship. 
The right hand side is operational, expressing a procedure call.
This denotational-operational distinction is appropriate in a more general context.
When we are presented with someone's program and asked "what does it compute?"
we usually begin our investigation operationally, discovering "what does it 
%2do%*?"⊗↓Another 
common manifestation of this "denotation" phenomenon is the familiar
programmer complaint: "It's easier to write your own than to understand
someone else's."←.
Frequently, by tracing its execution, we can discover a denotational  description:
E.g.,#"ah!#it#computes#the#square#root".
.END

When %3great mother%* was presented it was given as an operational exercise,
with the primary intent of introducing the LISP evaluation process without
involving  complicated names and concepts. Forms involving %3great mother%* were 
evaluated perhaps without understanding the denotation, but if asked "what does
%3great mother%* do?" you would be hard pressed to give a comprehensible
purely operational definition. However, you might have discovered the intended
nature of %3great mother%* yourself; then it would be relatively easy to describe
its (her) behavior. Indeed, once the denotation of %3great mother%* has been discovered
questions like "What is the value of %3tgmoaf[(CAR#(QUOTE#(A#.#B)))]%*? " 
are usually
answered by using the denotation of %3tgmoaf%*: "what is 
the value of %3car[(A#.#B)]%*?"
The question of how one gives a convincing argument that 
%3eval%* %2does%1 faithfully represent LISP evaluation is the
subject of ⊗↑[Gor#73]↑.


In discussing models for LISP,
we will parallel  our development of interpreters for LISP since each subset,
%3tgmoaf, tgmoafr%*, and %3eval%*,
will also introduce new problems for our mathematical semantics.

Our first LISP subset considers functions, composition, and constants.
Constants will be elements of our domain of interpretation.
Our domain will include
the S-expressions since %2most%* LISP expressions denote S-exprs; since
 many of our LISP functions are partial functions, 
it is convenient to talk about the undefined value, %9B%1. We
wish to extend our partial functions  to be %2total%* functions on
an extended domain.
As before ({yon(P251)}),
we shall call this extended domain %aS%1.

Before we can talk very precisely about the properties of the
mathematical functions denoted by LISP functions,
we must give more careful study to the nature of domains.
Our primitive domain 
is  %d<atom>%*.
Its intuitive structure is quite simple, basically just a set of atoms
or names with no inherent relationships  among the elements.
Another primitive domain is %aTr%1, the domain of truth values.

The domain %d<sexpr>%*   is more complex; it is  a set of elements, but many
elements are  related. 
In our discussion of %d<sexpr>%1  on {yon(P47)}
we made  it clear that there is more than syntax involved.
We could say that
for %ds%41%1 and %ds%42%1 in %d<sexpr>%*, the essence of "dotted pair"
is contained in the concept of the set-theoretic ordered pair, 
<%ds%41%1,%ds%42%1>. Thus the "meaning" of  the set of dotted pairs is
captured by Cartesian product, %d<sexpr>#%ax%*#<sexpr>%1.

.GROUP;
Let's continue the analysis of:

.BEGIN CENTERIT;
←<sexpr> ::= <atom> | (<sexpr> . <sexpr>).
.END
.APART

We are trying to interpret this BNF equation as a definition of the
domain %d<sexpr>%*. Reasonable interpretations of "::=" and "|" appear to
be equality and set-theoretic union, respectively. This results in the
equation:

.BEGIN CENTERIT;SELECT d;
←<sexpr> = <atom> %a∪%* <sexpr> %ax%* <sexpr> %1.
.END

This looks like an algebraic equation, and as such,
may or may not have  solutions.
This particular "domain equation" has a solution: the S-exprs.

There is a natural mapping of BNF equations onto such "domain equations",
and the solutions to the domain equations are sets satisfying 
the abstract essence of the BNF. 

Now consider the following BNF:
.BEGIN CENTERIT;
←<wfe> ::= <variable> | %9λ%*<variable>%d.%*<wfe> |  %d(%*<wfe> <wfe>%d)%*
.END
This is an abbreviated form of the BNF for  the %9λ%1-calculus.
We would like to describe the denotations of these equations in a
style similar to that used for <sexpr>'s.
The  denotations of the expressions, <wfe>,
 of applications, %d(%1<wfe>#<wfe>%d)%1, and of the  variables,
<variables>, are just constants of the language; call this domain %dC%*.

Expressions of the form "%9λ%1<variable>%d.%1<wfe>" are supposed to represent
functions. First we consider 
 the set of functions from %dC%* to %dC%*. Write that set as
%dC#→#C%*. Then our domain equation is expressed:

.BEGIN CENTERIT;SELECT d;

←C = C→C%a∪%dC
.END
This equation has
no %2interesting%1 solutions. A simple counting argument will establish that 
unless the domain %dC%* is a single element, the number of functions
in %dC#→#C%* is greater than the number of elements in %dC%*.
This does %2not%1 say that there are no models of the %9λ%1-calculus.
It says that our interpretation of "%d→%*"
is too broad. 

What is needed is an interpretation of functionality
which will allow a solution to the above domain equation; it
should allow a natural interpretation such that the properties which
we expect functions to possess are  true in the model.
Scott gave one such 
interpretation of "%d→%*", defining
 the class of "continuous functions".
This class of functions is restricted enough to satisfy the domain equation,
  but broad enough to act as the denotations of procedures in
applicative programming languages.
We will use the notation  "%d[%1D%41%1#%d→%1#D%42%d]%1" to mean "the set of
continuous functions from domain D%41%1 to domain D%42%1".
It is the continuous functions which first supplied a model
 for the %9λ%1-calculus and it is these functions which 
 supply a basis for a mathematical model of LISP.

In  particular we can assume that the LISP primitives
denote specific continuous functions. For example, the
mathematical counterpart to the LISP function %3car%* is the mapping %dcar%* from 
%aS%* to %aS%* defined as follows:

.BEGIN TABIT2(10,20);GROUP


\%dcar: [%aS%d → %*S%d]%1

\\ %1| is %9B%* if %dt%* is atomic
\%dcar(t)\%1< is %dt%41%1 if %dt%* is %d(t%41%* . t%42%*)
\\ %1| is %9B%* if %dt%* is %9B%*.

.END
Similar strategy suffices to give denotations for the other primitive LISP functions
and predicates. For example: 

.BEGIN TABIT2(10,20);GROUP


\%datom: [%aS%d → %*S%d]%1

\\ %1| is %ef%* if %dt%* is not atomic.
\%datom(t)\%1< is %et%* if %dt%* is atomic.
\\ %1| is %9B%* if %dt%* is %9B%*.

.END
Notice   that these  functions are strict: %df(%9B%d) = %9B%1.
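For instance, a denotation for %3cons%1 in the same style is the strict mapping:

.BEGIN TABIT2(10,20);GROUP


\%dcons: [%aS%*x%aS%* → %aS%*]%1

\\ %1| is %9B%* if %dx%* is %9B%* or %dy%* is %9B%*
\%dcons(x,y)\%1<
\\ %1| is %d(x . y)%1 otherwise.

.END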



Corresponding to %3tgmoaf%*, we will have a function, %9D%4tg%1, 
which maps expressions
onto their denotations.
Since  %9D%4tg%1 is  another mapping like  %9r%1, we will
use the  "%f(%1" and "%f)%1" brackets to enclose LISP constructs.
We need to introduce some notation for
elements of the sets <sexpr> and <form>. Let %as%* range over <sexpr>
and %af%* range over <form>, then 
we can write:

.BEGIN centerit;GROUP;

←%9D%4tg%f(%as%f)%1 = %ds%*
←%9D%4tg%f(%3car[%af%*]%f)%1 = %dcar%*(%9D%4tg%f(%af%f)%1)
.END

with similar entries for %3cdr, cons, eq, %1and %*atom%*.
The structure of this definition is very similar to that of %3tgmoaf%1.

Now we continue to   the next subset of LISP, adding conditional
expressions to our discussion. As we noted on {yon(P88)}, a degree of care must
be taken when we attempt to interpret conditional expressions in terms of mappings.
We can simplify the problem slightly: it is easy to show that the
 conditional expression can be formulated in terms of the  simple 
%3if%1-expression:
%3if[p%41%*;e%41%*;e%42%*]%1.
We will display a denotation for such %3if%1 expressions. It
will be a mathematical function, and therefore the evaluation order will have been
abstracted out⊗↓%1Recall the comment of Wadsworth
({yon(P91)}). Indeed, the use of conditional expressions in the more
abstract representations of LISP functions frequently is such that 
exactly one of the p%4i%*'s is %et%* and all the others are %ef%*.
Thus in this setting, the order of evaluation of the predicates is useful 
for "efficiency" but not necessary to maintain the sense of the definition.
See {yon(P101)}.←.

.BEGIN TABIT2(10,22);GROUP
Let %3if%1 denote %dif%1 where:

\%dif: [%aTr%*x%aS%*x%aS%* → %aS%*]

\\ %1| is%* y %1if%* x %1is%* %et%*
\%dif(x,y,z)\%1< is %dz%1, if %dx%1 is %ef%*.
\\ %1| is %9B%1, otherwise
.END
.GROUP;

This interpretation of conditional expressions is appropriate for LISP; other
interpretations of conditionals are possible. For example:

.BEGIN TABIT2(10,24);GROUP

\%dif%41%*: [%aTr%*x%aS%*x%aS%* → %aS%*]

\\ %1| is%* y %1if%* x %1is%* %et%*
\%dif%41%*(x,y,z)\%1< is %dz%1, if %dx%1 is %ef%*.
\\ %1| is %9B%1 if %dx%* is %9B%* and %dy ≠ z%*
\\ %1| is %dy%1 if %dx%* is %9B%* and %dy = z%*     ⊗↓%1Basing conditional 
expressions on %dif%41%1 would give %3[car[A]#→#1;# %et%3#→#1]%1
 value %31%1.←.
.END
Neither %dif%* nor %dif%41%1 is a strict mapping.


.APART

.GROUP;

To add %3if%1 expressions to %9D%4tg%1, yielding %9D%4tgr%1
we include:
.BEGIN TABIT1(12);FILL;

\%9D%4tgr%f(%3if[%af%41%3 ; %af%42%3; %af%43%3]%f)%1 =
%dif(%9D%4tgr%f(%af%41%f)%d,%9D%4tgr%f(%af%42%f)%d,%9D%4tgr%f(%af%43%f)%d)%1
.END
.APART


The next consideration is the denotational description of LISP identifiers.
Identifiers name either S-exprs or LISP functions.
Thus
an identifier denotes either an object in our domain %aS%1 or a function
object.
Let %aFn%* name the set of continuous functions:#%9S%4n=0%d[%aS%8n%d#→#%aS%d]%1,
and %aId%1 be %d<identifier>%1∪%9B%1.
We know that the value of a LISP <identifier> ({yon(P66)})  depends on the 
current environment. 
Then we might characterize  the set of environments, %denv%1, as:
.BEGIN CENTER

%d[%aId%1 → %aS%1 ∪ %aFn%1%d]%1.
.END
That is, an element of %denv%* is a continuous function which maps an identifier
either onto a S-expr or onto an n-ary function from S-exprs to S-exprs.
This is the essence of the argument used in introducing %3assoc%* ({yonss(P92)}).
Note that %3assoc[x;l]#=#%dl(x)%1 is another instance of an
operational-denotational relationship.

.BEGIN TURN OFF "{","}";
Given a LISP identifier, %3x%*, and a member of %denv%*, say 
the function %d{<x,2>,<y,4>}%*, then the new
%9D%* should map %3x%* onto %d2%*. This is an intuitive way of saying
that %9D%* should map a member of <identifier> onto a %2function%*. This function
will map a member of %denv%* onto an element of %aS%*.
Introducing %ai%* as a meta-variable to range over <identifier>,
then for %dl#%9ε%d#env%1 we have:

.CENTER;
%9D%f(%ai%f)%d(l)%1#=#%dl(i)%1
.END


The %2meaning%* or denotation of an identifier is a function;
whereas the %2value%* of an identifier is an element of %aS%1∪%aFn%1.

The treatment of identifiers leads directly into the
denotational aspects of function application.
We shall maintain the parallels between evaluation and denotation, by giving
%9D%4e%1 and %9D%4a%1 corresponding to %3eval%* and %3apply%*.
Let %ag%1 be a member of <function> and %af%1 be a member of <form>; then
for a given element of %denv%1, %9D%4a%1 maps %ag%1 onto an element of
%aFn%1, and %9D%4e%1 maps %af%1 onto an element of %aS%1.

.BEGIN CENTERit;
For example: ←%9D%4a%F(%3car%f)%d(l)%1 = %dcar%1
.END

.GROUP;
Similar equations hold for the other LISP primitive functions and predicates.
In general, then:
.BEGIN CENTER;

%9D%4a%f(%ag%f)%d(l)%1 = %dl(g)%*, where %ag%* %9ε %1<function>.
.END
.APART
To describe the evaluation of a function-call in LISP we must add
an equation to %9D%4e%1:
.BEGIN  TABIT1(15);FILL;TURN ON "#";

\%9D%4e%f(%ag%1[%af%41%1,#...,%af%4n%1]%f)%d(l)%1#=#
%9D%4a%f(%ag%f)%d(l)(%9D%4e%f(%af%41%f)%d(l)%1,#...,%9D%4e%f(%af%4n%f)%d(l))%1
.END
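For example, if %dl%1 maps %3x%1 onto %d(A . B)%1 and maps %3car%1 onto %dcar%1, then:
.BEGIN CENTERIT;

←%9D%4e%f(%3car[x]%f)%d(l)%1 = %9D%4a%f(%3car%f)%d(l)(%9D%4e%f(%3x%f)%d(l))%1 = %dcar(l(x))%1 = %dA%1
.END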
We must also make consistent modifications to the previous clauses of %9D%4tgr%1 to
account for environments.
That is, the value of a constant is independent of the
specific environment in which it is evaluated. 
.BEGIN TURN OFF "{","}";TURN ON "#";center;
%9D%4e%f(%as%f)%d(l)%1#=#%ds%1. 

.END
A similar  modification must be made for
conditional expressions.

Before we get very far in applying functions to values
we must give a mathematical characterization of function definitions.
In this section we will handle
λ-notation without free variables, postponing more complex
cases to {yonss(P90)}.

Assuming the only free variables in %9x%* are among the %3v%4i%1's
the denotation of %3λ[[v%41%*, ..., v%4n%*] %9x%1] in a specified
environment should be a function 
from %aS%8n%1 to %aS%* such that:
.BEGIN TABIT1(15);FILL;TURN ON "#";

\%9D%4a%f(%3λ[[%av%41%1,#...#%av%4n%1]%as%1]%f)%d(l)%1#=#
%d%9λ%d(x%41%*,#...,#%dx%4n%*)%9D%4e%f(%as%f)%d(l#:#<x%41%*,%dv%41%d>,#...,#<x%4n%*,%dv%4n%d>)%1
.END

where λ is the LISP λ-notation and %9λ%* is its mathematical counterpart and
%dv%4i%1 is the denotational counterpart of %av%4i%1.

In more detail:
%9λ%d(x%41%*, ... ,x%4n%*)e(x%41%*, ... ,x%4n%*) %1is a function %df%*: %aS%8n%1 → %aS%* such that:

.BEGIN TABIT1(15);GROUP;

\ | is %de(t%41%*, ... ,t%4n%*) %1if m%c≥%*n and %c∀%*i%dt%4i%1 %c≠%* %9B%1.
%df(t%41%*, ... t%4m%*)  %1\<
\ | is %9B%* otherwise

.END
Also, %d(l#:#<x%41%*,%dv%41%d>,#...,#<x%4n%*,%dv%4n%d>)%1 is a modification
of %dl%* such that each %dv%4i%1 is bound to  the corresponding %dx%4i%1.

.BEGIN TABIT2(20,40);GROUP;
That is:\%d(l#:#<x,v>)%1 is: %9λ%d(v%41%*)%2\if %dv = %9B%* %2then %9B%2 
\\else if %dv%41%* = %9B%2 then %9B%2
\\else if %dv%41%* = v%2 then %dx%2
\\else %dl(v%41%*)%1.

where: %2if%d p%* then %dq%* else %dr%1  is  %dif(p,q,r)%1.
.END


Analysis of our study will show that one of the  larger difficulties
was caused by our insistence on dealing with type-free languages. Self-application
is one indication of this. We can show that imposing a type structure on our language
will also solve many of the questions of non-termination. In a 
typed %9λ%1-calculus
an expression will always have a normal form#(⊗↑[Mor#68]↑). 
Computationally this means that all the programs will terminate.
Also models for typed %9λ%1-calculus are much more readily attained.
However  the type free calculus is a stronger system, and requiring all expressions
to have a consistent type structure rules out several useful constructs;
in particular, the %9λ%1-calculus counterpart to the LISP %3label%1 operator
cannot be consistently typed.

From the practical side, a type structure is also a mixed blessing.
LISP programmers frequently miss the declarations of common programming languages.
Language declarations are a form of typing and can be quite helpful in
pinpointing programming errors; declarations can also be used by compilers to 
help produce optimized code. However those same programmers want to subvert
the type structure, typically in the name of efficiency or expediency.
Also a type structure can be a real nuisance when trying to debug a program;
it is frequently desirable to examine and
modify the representations of abstract data structures. Those kinds of operations
imply the ability to ignore the type information.

Logically, the next addition to %9D%1 would involve recursion and function
definitions: %3label%1 and "<=".
We know that the LISP %3label%1 operator is similar to "<=", but
%3label%1 builds a temporary definition, while "<=" modifies the
environment. 
Programming language constructs which modify the environment
are said to have %2side-effects%1 and  are an instance
of what is called an imperative construct.
Since our main interest lies in the programming aspects, we will
pursue the  procedural aspects and postpone the mathematics.
The next chapter introduces the procedural aspects of 
imperative constructs and  in {yonss(P90)}
we will investigate some of the mathematical aspects of
"<=" and %3label%1.


.CENT(Problem)
.P271:
Let %dY%1 be the %9λ%1-expression %9λ%df.((%9λ%dh.f(h h))(%9λ%dh.f(h h)))%1.
Show that %d(YF)#=#F(YF)%1.